"Classic First Amendment retaliation." That's how US District Judge Rita Lin described the Department of War's effort to blacklist Anthropic and designate it a supply-chain risk.
"[T]hese measures appear designed to punish Anthropic," Lin wrote in an order granting Anthropic's request for a preliminary injunction.
Officials seemingly had no authority to take such extreme actions without considering less restrictive alternatives or offering any evidence that Anthropic posed an urgent risk to national security, Lin said. Instead, "the Department of War’s records show that it designated Anthropic as a supply chain risk because of its 'hostile manner through the press.'"
"Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation," Lin said.
Anthropic's spokesperson told Ars the firm is "grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits." But Anthropic remains in a difficult position, still afraid that the fight will block it from competing for lucrative government contracts. In a blog post earlier this month, Anthropic maintained that "Anthropic has much more in common with the Department of War than we have differences" and that the two should be working together to deploy AI safely across government. Anthropic is still walking that same line in the aftermath of Lin's order.
"While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI," Anthropic's spokesperson said.
DoW official calls order a "disgrace"
For Anthropic, this fight could be existential. After the DoW's actions, three trade deals were promptly canceled, while other potential partners delayed talks. The company showed it was already suffering irreparable harms that would only worsen the longer the blacklisting was upheld—including losing potentially billions in private and government contracts the company expected to sign over the next five years, Lin noted.
To prevent ongoing harms, Lin ordered a preliminary injunction blocking US agencies from complying with directives from Donald Trump and Secretary of War Pete Hegseth.
However, she also granted the government's request for an administrative stay, which delays the injunction from taking effect for seven days. That gives the government a brief window to seek an emergency stay from an appeals court.
Asked for comment, DoW pointed to statements on X from Under Secretary of War Emil Michael, who emphasized that the supply-chain risk designation still applies over the next week. Showing that Trump officials don't plan to back off the fight, Michael claimed that Lin's order was "a disgrace" and contained "factual errors" due to the judge's supposed rush to order the injunction. According to Michael, Lin did not fully consider how blocking Hegseth's directive could "disrupt" US military operations.
However, Lin cited a brief filed in support of Anthropic from military leaders, who warned that letting the directive stand "will materially detract from military readiness and operational safety." Anthropic has argued that its technology is not ready to be used for mass surveillance of Americans or in fully autonomous lethal weapons, potentially posing civil rights risks if leveraged now.
Lin noted that the case touched on "an important public debate," which is whether an AI company can dictate how the government uses its models. But it was not up to her to decide if AI firms or the government should be in charge of deciding what AI uses are safe for the public.
Instead, she had to rule on whether government officials violated Anthropic's First Amendment rights, denied Anthropic due process, or acted arbitrarily or capriciously. And at this stage of the case, Anthropic has shown enough to prove it's likely to succeed on all claims, she said.
DoW is not authorized to "designate a domestic vendor a supply chain risk simply because a vendor publicly criticized DoW’s views about the safe uses of its system," Lin wrote. In fact, "that designation has never been applied to a domestic company and is directed principally at foreign intelligence agencies, terrorists, and other hostile actors," she said.
"I don't know": Lawyer has no defense of Hegseth
The DoW began using Anthropic's Claude in March 2025 and had been using it for the past year without ever raising any concerns that Anthropic's terms limiting certain uses posed a national security risk, Lin said.
Rather, the government thoroughly vetted Claude before implementing it, praised Anthropic publicly, and planned to expand the partnership.
The amicable nature of the partnership only changed, the judge said, after DoW sought to deploy Claude on a military platform and Anthropic ultimately agreed to do so with "two critical exceptions: mass surveillance of Americans and lethal autonomous warfare."
Based on its testing, Anthropic could not guarantee that Americans' civil rights would not be infringed if Claude was used for these purposes, Anthropic said. If the government disliked the terms, Anthropic repeatedly said it would understand if another vendor was selected, simply bowing out to avoid compromising on AI safety principles that might "undercut Anthropic's core identity," Lin wrote.
Calling out Anthropic for "utopian idealism," DoW officials blasted Anthropic for supposedly trying to get the government to let a private company decide how military operations go down.
"You can’t have an AI company sell AI to the Department of War and [then say] don’t let it do Department of War things," Michael told the press.
Officials accused Anthropic of trying to use the DoW dispute to spin up positive press, and Trump joined the chorus. On Truth Social, he labeled Anthropic a "radical left, woke company," accusing the firm of putting its "selfishness" above national security. Following Trump's post, Hegseth took to X, writing that "Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon."
In both posts, officials claimed that orders to blacklist Anthropic were effective immediately, but neither cited what authority they had to do so.
During oral arguments, a government lawyer later admitted that "he was not aware of any statute that gave Secretary Hegseth the authority to issue such a prohibition and agreed that the statement had 'absolutely no legal effect at all,'" Lin wrote. Further, "when asked why Hegseth made a public statement that had no legal effect and that did not reflect the immediate intent of DoW, counsel stated, 'I don’t know.'"
Perhaps most glaringly, Hegseth seemed to contradict himself by arguing both that Anthropic "presented a grave threat to national security," requiring a supply-chain risk designation, and that "Anthropic was essential to national security" and could be compelled to provide services under the Defense Production Act.
The only reason the government gave for labeling Anthropic a national security risk was that the company could supposedly update its products and compromise systems. Officials claimed that Anthropic would be motivated to sabotage the military as retaliation for the directives.
But Lin didn't find that likely, either, since any other IT provider could potentially introduce the same risks. More importantly, Anthropic showed unrebutted evidence that it would be impossible to force updates or otherwise control the government's systems. To the judge, any national security risk could be foreclosed by simply ending the military's contract with Anthropic, which Anthropic had already agreed would be understandable.
Lacking any statutory basis, it seemed clear from officials' statements that Anthropic was being punished for publicly criticizing the military's plans, the judge concluded.
As Anthropic alleged, "defendants set out to publicly punish Anthropic for its 'ideology' and 'rhetoric,' as well as its 'arrogance' for being unwilling to compromise those beliefs," the judge said. "Secretary Hegseth expressly tied Anthropic’s punishment to its attitude and rhetoric in the press."
Anthropic retaliation "deeply troubling," judge says
On top of rushing to shut down the government's contracts with Anthropic and interfering with its commercial deals with businesses that also hoped to work with the DoW, Hegseth failed to give Anthropic an opportunity to defend itself against the claims before taking action that the record shows wasn't urgent.
Civil rights and public safety advocates had urged the court to block the government's actions or else risk a chilling effect preventing any AI firm from speaking up about unsafe government AI uses.
Ultimately, Lin agreed that the government branding a vendor an "adversary" in retaliation for its speech was "deeply troubling."
That could "chill open deliberation" and "professional debate" among those "best positioned to understand AI technology" and its potential for "catastrophic misuse," the judge wrote.
As the government likely moves to block the preliminary injunction, it argues that Lin's order could force it to pay for Anthropic products without ever recouping that money. Officials also claimed they were conducting an audit to see if any current security risks could justify the supply-chain risk designation.
Lin doesn't see it that way, though. She wrote that "the preliminary injunctive relief that the Court authorizes does not require the government to continue to use Claude on its national security systems." She also noted that "the government 'cannot suffer harm from an injunction that merely ends an unlawful practice.'"